

Search for: All records

Editors contains: "Hunter, SR"


  1. Lam, H; Azar, E; Batur, D; Gao, S; Xie, W; Hunter, SR; Rossetti, MD (Ed.)
    Plausible inference is a growing body of literature that treats stochastic simulation as a gray box: structural properties of the simulation output performance measures, viewed as functions of design, decision, or contextual variables, are assumed known. Plausible inference exploits these properties so that outputs at simulated values of the decision variables provide inference about performance measures at values that have not been simulated; statements about possible optimality or feasibility are examples. Lipschitz continuity is a structural property of many simulation problems. Unfortunately, the Lipschitz constant, which is essential for plausible inference, is rarely known. In this paper we show how to obtain plausible inference with an estimated Lipschitz constant that is itself derived by plausible-inference reasoning, and how to create the experiment design to simulate.
    Free, publicly-accessible full text available December 8, 2025
  2. Corlu, CG; Hunter, SR; Lam, H; Onggo, BS; Shortle, J; Biller, B. (Ed.)
    Calibration is a crucial step for model validity, yet its representation is often disregarded. This paper proposes a two-stage approach that calibrates a model to target data by identifying multiple diverse parameter sets while remaining computationally efficient. The first stage employs a black-box optimization algorithm to generate near-optimal parameter sets; the second stage clusters the generated sets. Five black-box optimization algorithms, namely Latin Hypercube Sampling (LHS), Sequential Model-based Algorithm Configuration (SMAC), Optuna, Simulated Annealing (SA), and Genetic Algorithm (GA), are tested and compared using a disease-opinion compartmental model with predicted health outcomes. Results show that LHS and Optuna allow more exploration and capture more variety in possible future health outcomes. SMAC, SA, and GA are better at finding the best parameter set, but their sampling approaches generate less diverse model outcomes. This two-stage approach can reduce computation time while producing a robust and representative calibration.
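The core idea in the first abstract, bounding a Lipschitz-continuous performance measure at unsimulated points, can be illustrated with a minimal sketch. This is not the paper's method: it assumes a one-dimensional deterministic output for simplicity, and the function names (`plausible_interval`, `estimate_L`) and the crude pairwise-slope estimate of the Lipschitz constant are illustrative assumptions only.

```python
import numpy as np

def plausible_interval(x_sim, y_sim, x_new, L):
    """Bounds on f(x_new) implied by |f(a) - f(b)| <= L*|a - b|
    and observed values y_i = f(x_i) at simulated points x_i."""
    d = np.abs(x_new - x_sim)
    lo = np.max(y_sim - L * d)   # every observation pushes the value up from below
    hi = np.min(y_sim + L * d)   # and caps it from above
    return lo, hi

def estimate_L(x_sim, y_sim):
    """Crude Lipschitz estimate: the largest observed pairwise slope.
    This is only a lower bound on the true constant (an assumption of this sketch)."""
    dx = x_sim[:, None] - x_sim[None, :]
    dy = y_sim[:, None] - y_sim[None, :]
    mask = np.abs(dx) > 0
    return np.max(np.abs(dy[mask]) / np.abs(dx[mask]))

# Toy deterministic example: f(x) = |x|, whose true Lipschitz constant is 1.
x_sim = np.array([-2.0, -1.0, 1.0, 2.0])
y_sim = np.abs(x_sim)
L_hat = estimate_L(x_sim, y_sim)
lo, hi = plausible_interval(x_sim, y_sim, 0.0, L_hat)  # true value f(0) = 0 lies in [lo, hi]
```

With these four points the estimated constant is 1 and the interval at x = 0 is [0, 2], showing how simulated outputs constrain an unsimulated point without further simulation.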
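The two-stage calibrate-then-cluster approach in the second abstract can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the quadratic `loss`, its target values, the 10% survivor cutoff, and the tiny k-means routine are all illustrative assumptions, with LHS standing in for the five optimizers compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """Stage 1 sampler: one point per stratum in each dimension, strata randomly paired."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

def loss(theta):
    """Stand-in calibration objective: squared distance to a hypothetical target fit."""
    target = np.array([0.3, 0.7])
    return np.sum((theta - target) ** 2, axis=1)

def kmeans(points, k, rng, iters=50):
    """Stage 2: group near-optimal parameter sets into diverse representative clusters."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Stage 1: sample the parameter space and keep the near-optimal sets (best 10%).
theta = latin_hypercube(200, 2, rng)
l = loss(theta)
keep = theta[l <= np.quantile(l, 0.1)]
# Stage 2: cluster the survivors into a few diverse calibrated parameter sets.
centers, labels = kmeans(keep, 3, rng)
```

Clustering rather than keeping only the single best set is what preserves diversity: each cluster center is a distinct, near-optimal calibration candidate.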